

Korean Title: 도시 환경에서의 이미지 분할 모델 대상 적대적 물리 공격 기법 (Adversarial Physical Attack Technique against Image Segmentation Models in Urban Environments)
English Title: Adversarial Wall: Physical Adversarial Attack on Cityscape Pretrained Segmentation Model
Author(s): Naufal Suryanto, Harashta Tatimma Larasati, Yongsu Kim, Howon Kim
Citation: Vol. 29, No. 02, pp. 402-404 (Nov. 2022)
Korean Abstract:

English Abstract:
Recent research has shown that deep learning models are vulnerable to adversarial attacks not only in the digital domain but also in the physical domain. This is especially critical for safety-sensitive applications such as self-driving cars. In this study, we propose a physical adversarial attack technique for one of the common tasks in self-driving cars, namely segmentation of the urban scene. Our method creates a texture on a wall so that the wall is misclassified as road. A demonstration of the technique on a state-of-the-art Cityscapes-pretrained segmentation model shows a fairly high success rate, which should raise awareness of further potential attacks on self-driving cars.
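The abstract describes optimizing a physical wall texture so that a segmentation model labels the wall region as road. The sketch below is a minimal, hypothetical illustration of that general idea and is not the authors' implementation: it optimizes a texture patch pasted into a street image so that a segmentation network predicts the "road" class inside the patched region. The model choice (torchvision's DeepLabV3 as a stand-in for a Cityscapes-pretrained model), image size, patch size and placement, class index, and loss are all assumptions made for illustration.

    # Hypothetical sketch, NOT the paper's code: optimize a wall texture so that
    # a segmentation network labels the textured region as "road".
    import torch
    import torch.nn.functional as F
    import torchvision

    NUM_CLASSES = 19          # Cityscapes trainId classes (assumed)
    ROAD_ID = 0               # "road" trainId in Cityscapes (assumed)
    H, W = 512, 1024          # network input size (assumed)
    PH, PW = 200, 300         # wall-texture patch size in pixels (assumed)
    PY, PX = 180, 400         # where the wall appears in the frame (assumed)

    # Stand-in model; a real attack would load Cityscapes-pretrained weights.
    model = torchvision.models.segmentation.deeplabv3_resnet50(
        weights=None, num_classes=NUM_CLASSES
    ).eval()

    texture = torch.rand(1, 3, PH, PW, requires_grad=True)   # adversarial texture
    optimizer = torch.optim.Adam([texture], lr=0.01)

    scene = torch.rand(1, 3, H, W)   # placeholder for real street images

    # Target: every pixel inside the patched region should be predicted as "road".
    target = torch.full((1, PH, PW), ROAD_ID, dtype=torch.long)

    for step in range(200):
        # Paste the current texture (kept in [0, 1]) onto the wall region.
        x = scene.clone()
        x[:, :, PY:PY + PH, PX:PX + PW] = texture.clamp(0, 1)

        logits = model(x)["out"]   # (1, NUM_CLASSES, H, W), upsampled to input size

        # Cross-entropy only over the patched region pushes it toward "road".
        loss = F.cross_entropy(logits[:, :, PY:PY + PH, PX:PX + PW], target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if step % 50 == 0:
            print(f"step {step:3d}  loss {loss.item():.4f}")

In a physical setting, one would typically also average the loss over random viewpoints, scales, and lighting (expectation over transformation) and constrain the texture to printable colors before fabricating it as a wall covering; those steps are omitted from this sketch.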
Keyword(s):